Pseudo-Maximization and Self-Normalized Processes

Authors

  • Victor H. de la Peña
  • Michael J. Klass
  • Tze Leung Lai
Abstract

Self-normalized processes are basic to many probabilistic and statistical studies. They arise naturally in the study of stochastic integrals, martingale inequalities and limit theorems, likelihood-based methods in hypothesis testing and parameter estimation, and Studentized pivots and bootstrap-t methods for confidence intervals. In contrast to standard normalization, large values of the observations play a lesser role because they appear in both the numerator and the self-normalized denominator, making the process scale invariant and contributing to its robustness. Herein we survey a number of results for self-normalized processes in the case of dependent variables and describe a key method called "pseudo-maximization" that has been used to derive these results. In the multivariate case, self-normalization consists of multiplying by the inverse of a positive definite matrix (instead of dividing by a positive random variable as in the scalar case) and is ubiquitous in statistical applications, examples of which are given.
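The scale invariance described in the abstract can be illustrated with a minimal sketch (not from the paper): a scalar self-normalized sum S_n / V_n with V_n the root of the sum of squares, which is unchanged when all observations are rescaled by a common factor.

```python
import numpy as np

def self_normalized_sum(x):
    # S_n / V_n with V_n = sqrt(sum x_i^2): large observations enter both
    # the numerator and the denominator, so the ratio is scale invariant.
    return x.sum() / np.sqrt((x ** 2).sum())

rng = np.random.default_rng(0)
x = rng.standard_normal(100)

t1 = self_normalized_sum(x)
t2 = self_normalized_sum(1000.0 * x)  # rescaling leaves the statistic unchanged
assert np.isclose(t1, t2)
```

The same idea extends to the multivariate case surveyed in the paper, where division by V_n is replaced by multiplication by the inverse of a positive definite matrix.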

Similar articles

Pseudo-spectral Matrix and Normalized Grunwald Approximation for Numerical Solution of Time Fractional Fokker-Planck Equation

This paper presents a new numerical method to solve the time fractional Fokker-Planck equation. The space dimension is discretized at the Gauss-Lobatto points, and a pseudo-spectral successive integration matrix is then applied in this dimension. This approach shows that, with fewer points, the solution can be approximated more accurately. Numerical results for the examples are displayed.

Self-Organization by Optimizing Free-Energy

We present a variational Expectation-Maximization algorithm to learn probabilistic mixture models. The algorithm is similar to Kohonen's Self-Organizing Map algorithm and is not limited to Gaussian mixtures. We maximize the variational free energy, which sums the data log-likelihood and the Kullback-Leibler divergence between a normalized neighborhood function and the posterior distribution on the components...

Persistency for higher-order pseudo-boolean maximization

A pseudo-Boolean function is a function from a 0/1-vector to the reals. Minimizing pseudo-Boolean functions is a very general problem with many applications. In image analysis, the problem arises in segmentation or as a subroutine in tasks like stereo estimation and image denoising. Recent years have seen increased interest in higher-degree problems, as opposed to quadratic pseudo-Boolean func...

Improved inter-modality image registration using normalized mutual information with coarse-binned histograms

In this paper we extend the method of inter-modality image registration using the maximization of normalized mutual information (NMI) to the registration of [18F]-2-fluoro-deoxy-D-glucose (FDG) positron emission tomography (PET) with T1-weighted magnetic resonance (MR) volumes. We investigate the impact on NMI maximization of using coarse-to-fine grained B-spline bases and to t...

Dopaminergic Balance between Reward Maximization and Policy Complexity

Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main-axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signa...


Publication year: 2007